Machine Learning for Network Attacks Classification and Statistical Evaluation of Adversarial Learning Methodologies for Synthetic Data Generation
Zarkadis, Iakovos-Christos, Douligeris, Christos
Supervised detection of network attacks has always been a critical component of network intrusion detection systems (NIDS). At a pivotal time for artificial intelligence (AI), when increasingly sophisticated attacks exploit advanced techniques such as generative artificial intelligence (GenAI) and reinforcement learning, robust detection is vital for protecting personal data scattered across the web. In this paper, we address two tasks on the first unified multi-modal NIDS dataset, which incorporates flow-level data, packet payload information, and temporal contextual features from the reprocessed CIC-IDS-2017, CIC-IoT-2023, UNSW-NB15, and CIC-DDoS-2019 datasets within a common feature space. In the first task, we apply machine learning (ML) algorithms with stratified cross-validation to classify network attacks with stability and reliability. In the second task, we use adversarial learning algorithms to generate synthetic data, compare them with the real data, and evaluate their fidelity, utility, and privacy using the SDV framework, f-divergences, distinguishability, and non-parametric statistical tests. The findings yield stable ML models for intrusion detection and generative models with high fidelity and utility, obtained by combining the Synthetic Data Vault framework and the TRTS and TSTR tests with non-parametric statistical tests and f-divergence measures.
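The stratified cross-validation step described in the first task can be sketched as follows. This is a minimal illustration with scikit-learn on synthetic data, not the paper's pipeline: the features, the classifier choice, and the F1 scoring metric are assumptions standing in for the unified NIDS dataset and the authors' actual model selection.

```python
# Hedged sketch: stratified k-fold cross-validation on imbalanced data,
# mimicking the attack/benign skew typical of NIDS datasets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the unified multi-modal NIDS feature space;
# weights=[0.9, 0.1] emulates a 90/10 benign-to-attack imbalance.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)

# Stratified folds preserve the class ratio in every fold, which is what
# stabilizes per-fold metrics on imbalanced intrusion data.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X, y, cv=cv, scoring="f1")
print(f"F1 per fold: {np.round(scores, 3)}, mean = {scores.mean():.3f}")
```

Reporting the per-fold spread alongside the mean is what supports the stability claim: a model whose folds disagree widely is not reliable regardless of its average score.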
Calibration Bands for Mean Estimates within the Exponential Dispersion Family
Delong, Łukasz, Gatti, Selim, Wüthrich, Mario V.
Version of October 8, 2025.

Abstract: A statistical model is said to be calibrated if the resulting mean estimates perfectly match the true means of the underlying responses. Aiming for calibration is often not achievable in practice, as one has to deal with finite samples of noisy observations. A weaker notion of calibration is auto-calibration: an auto-calibrated model satisfies that the expected value of the responses, given a mean estimate, matches this estimate. Testing for auto-calibration has only recently been considered in the literature, and we propose a new approach based on calibration bands. Calibration bands denote a set of lower and upper bounds such that the probability that the true means lie simultaneously inside those bounds exceeds a given confidence level. Such bands were constructed by Yang-Barber (2019) for sub-Gaussian distributions. Dimitriadis et al. (2023) then introduced narrower bands for the Bernoulli distribution. We use the same idea to extend the construction to the entire exponential dispersion family, which contains, for example, the binomial, Poisson, negative binomial, gamma, and normal distributions. Moreover, we show that the obtained calibration bands allow us to construct various tests for calibration and auto-calibration, respectively. As the construction of the bands does not rely on asymptotic results, we emphasize that our tests can be used for any sample size.

Keywords: auto-calibration, calibration, calibration bands, exponential dispersion family, mean estimation, regression modeling, binomial distribution, Poisson distribution, negative binomial distribution, gamma distribution, normal distribution, inverse Gaussian distribution.
1 Introduction

Various statistical methods can be used to derive mean estimates from available observations, and it is important to understand whether these mean estimates are reliable for decision making. A statistical model is said to be calibrated if the resulting mean estimates perfectly match the true means of the underlying responses. In practice, calibration is often not achievable, as estimates are obtained from finite samples of noisy observations.
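The two notions discussed above can be written compactly. Denoting the response by $Y$, the covariates by $X$, and the mean estimate by $\hat{\mu}(X)$ (notation assumed here, matching the verbal definitions in the abstract):

```latex
% Calibration: the estimate recovers the true conditional mean
\hat{\mu}(X) = \mathbb{E}\!\left[ Y \mid X \right]
  \qquad \text{(calibration)}

% Auto-calibration: conditioning only on the estimate itself,
% the responses average back to that estimate
\mathbb{E}\!\left[ Y \mid \hat{\mu}(X) \right] = \hat{\mu}(X)
  \qquad \text{(auto-calibration)}
```

Auto-calibration is weaker because it only requires correctness on average within each level set of the estimator, not pointwise in $X$.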
Accuracy Law for the Future of Deep Time Series Forecasting
Wang, Yuxuan, Wu, Haixu, Ma, Yuezhou, Fang, Yuchen, Zhang, Ziyi, Liu, Yong, Wang, Shiyu, Ye, Zhou, Xiang, Yang, Wang, Jianmin, Long, Mingsheng
Deep time series forecasting has emerged as a booming direction in recent years. Despite the exponential growth of community interest, researchers are sometimes confused about the direction of their efforts due to minor improvements on standard benchmarks. In this paper, we notice that, unlike image recognition, whose well-acknowledged and realizable goal is 100% accuracy, time series forecasting inherently faces a non-zero error lower bound due to its partially observable and uncertain nature. To pinpoint the research objective and release researchers from saturated tasks, this paper focuses on a fundamental question: how to estimate the performance upper bound of deep time series forecasting? Going beyond classical series-wise predictability metrics, e.g., the ADF test, we realize that forecasting performance is highly related to window-wise properties because of the sequence-to-sequence forecasting paradigm of deep time series models. Based on rigorous statistical tests of over 2,800 newly trained deep forecasters, we discover a significant exponential relationship between the minimum forecasting error of deep models and the complexity of window-wise series patterns, which is termed the accuracy law. The proposed accuracy law successfully guides us to identify saturated tasks from widely used benchmarks and derives an effective training strategy for large time series models, offering valuable insights for future research.

Despite these advancements, we notice that the latest proposed models have shown minor improvements on existing widely used benchmarks. As presented in Figure 1, the improvement in the performance of deep time series models on four standard benchmarks has slowed significantly over the past three years. For instance, on the ETT benchmark (Zhou et al., 2021), the relative forecasting performance improvements exhibited a continuous downward trend from 2022 to 2025, with values of 14.98%, 7.77%, 3.93%, and 3.51%, respectively.
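An exponential relationship of the kind the accuracy law describes can be fit by log-linear regression. The sketch below uses synthetic (complexity, error) pairs generated under an assumed form err ≈ a·exp(b·c); the paper's actual window-complexity metric and its 2,800-model results are not reproduced here, so the constants are purely illustrative.

```python
# Hedged sketch: recovering an exponential law err = a * exp(b * c)
# from noisy (complexity, error) observations via log-linear least squares.
import numpy as np

rng = np.random.default_rng(0)
complexity = rng.uniform(0.5, 5.0, size=200)   # synthetic complexity scores
true_a, true_b = 0.05, 0.6                     # illustrative ground truth
error = true_a * np.exp(true_b * complexity) * rng.lognormal(0.0, 0.05, 200)

# Taking logs turns the exponential fit into ordinary least squares:
#   log(err) = log(a) + b * c
b_hat, log_a_hat = np.polyfit(complexity, np.log(error), 1)
print(f"a ~= {np.exp(log_a_hat):.3f}, b ~= {b_hat:.3f}")
```

On a log scale the law is a straight line, which is also how a plot like the paper's Figure 1 trend would make saturation visually apparent: tasks whose observed errors already sit on the fitted line have little headroom left.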
Export Reviews, Discussions, Author Feedback and Meta-Reviews
The authors discuss how the problems can be formulated as optimization of objective functions defined on the subgraphs. A straightforward search over the subgraphs is computationally infeasible, so the authors present a highly novel approach that leads to computationally efficient tests. The paper includes proofs that the tests are nearly minimax optimal for the exponential family of distributions and graphs satisfying the polynomial growth property. The paper concludes with an analysis of synthetic and real datasets. Strengths: (1) The paper addresses a problem of growing importance and presents novel approaches for statistical tests.